
Latest news with #artificial intelligence

NSF Gives Georgia Tech $20 Million To Build AI-Focused Supercomputer

Forbes

11 hours ago

  • Science
  • Forbes


The Georgia Institute of Technology is building a new supercomputer that will advance the nation's capacity to use artificial intelligence.

The National Science Foundation has awarded the Georgia Institute of Technology $20 million to lead the construction of a new supercomputer — named Nexus — that will use artificial intelligence to advance scientific breakthroughs. According to the NSF announcement of the award, Nexus will provide 'a critical national resource to the science and engineering research community.' It will function 'both as a standalone platform and as a gateway to effectively utilizing other national resources, significantly accelerating AI-driven scientific discovery.' Nexus is expected to advance American leadership in artificial intelligence, growing the nation's capacity in 'diverse areas of science and engineering, enabling breakthrough discoveries, increasing economic competitiveness, and advancing human health.'

'Georgia Tech is proud to be one of the nation's leading sources of the AI talent and technologies that are powering a revolution in our economy,' said Georgia Tech President Ángel Cabrera in the university's announcement. 'It's fitting we've been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation. We're grateful to the NSF, and we are excited to get to work.'

Nexus will have enormous computing capacity, according to Georgia Tech. 'The Nexus system's novel approach combining support for persistent scientific services with more traditional high-performance computing will enable new science and AI workflows that will accelerate the time to scientific discovery,' said Katie Antypas, NSF's director of the Office of Advanced Cyberinfrastructure. 'We look forward to adding Nexus to NSF's portfolio of advanced computing capabilities for the research community.'
Georgia Tech will construct Nexus in partnership with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, home to several of the country's top academic supercomputers. The two universities will establish a new high-speed network, creating a national research infrastructure available for use by U.S. researchers, who will be able to apply for NSF support to access the supercomputer.

'Nexus is more than a supercomputer — it's a symbol of what's possible when leading institutions work together to advance science,' said Charles Isbell, chancellor of the University of Illinois and former dean of Georgia Tech's College of Computing. 'I'm proud that my two academic homes have partnered on this project that will move science, and society, forward.'

Plans call for construction to begin this year, with completion expected by spring 2026. Georgia Tech will manage Nexus, provide support, and reserve up to 10% of its capacity for its own campus research.

'This is a big step for Georgia Tech and for the scientific community,' said Vivek Sarkar, the John P. Imlay Dean of Computing. 'Nexus will help researchers make faster progress on today's toughest problems — and open the door to discoveries we haven't even imagined yet.'

The Number Of Questions That AGI And AI Superintelligence Need To Answer For Proof Of Intelligence

Forbes

17 hours ago

  • Science
  • Forbes


How many questions will we need to ask AI to ascertain that we've reached AGI and ASI?

In today's column, I explore an intriguing and unresolved AI topic that hasn't received much attention but certainly deserves considerable deliberation. The issue is this: how many questions should we be prepared to ask AI to ascertain whether it has reached the vaunted level of artificial general intelligence (AGI), and perhaps even attained artificial superintelligence (ASI)? This is more than merely an academic philosophical concern. At some point, we should be ready to agree on whether AGI and ASI have been reached. The likely way to do so entails asking questions of AI and then gauging the intellectual acumen expressed by the AI-generated answers. So, how many questions will we need to ask? Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. A great deal of research is underway to further advance AI. The general goal is to reach artificial general intelligence (AGI), or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI considered on par with human intellect, seemingly able to match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all; it might take decades or perhaps centuries.
The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale relative to where we currently are with conventional AI.

About Testing For Pinnacle AI

Part of the difficulty facing humanity is that we don't have a surefire test to ascertain whether we have reached AGI or ASI. Some people proclaim rather loftily that we'll just know it when we see it. In other words, it's one of those fuzzy aspects that belies any kind of systematic assessment. An overall feeling or intuitive sense on our part will lead us to decide that pinnacle AI has been achieved. Period, end of story.

But that can't be the end of the story, since we ought to have a more mindful way of determining whether pinnacle AI has been attained. If the only means consists of a Gestalt-like emotional reaction, a whole lot of confusion will arise. Lots of people will declare that pinnacle AI exists, while lots of others will insist that the declaration is utterly premature. Immense disagreement will be afoot. See my analysis of people who already falsely believe that they have witnessed pinnacle AI, such as AGI and ASI, at the link here.

Some form of bona fide assessment or test that formalizes the matter is sorely needed. I've extensively discussed and analyzed a well-known AI-insider test known as the Turing Test; see the link here. The Turing Test is named after the famous mathematician and early computer scientist Alan Turing. In brief, the idea is to ask questions of AI, and if you cannot distinguish the responses from those a human would give, you might declare that the AI exhibits intelligence on par with humans.

Turing Test Falsely Maligned

Be cautious if you ask an AI techie what they think of the Turing Test. You will get quite an earful. It won't be pleasant. Some believe that the Turing Test is a waste of time.
They will argue that it doesn't work suitably and is outdated. We've supposedly gone far past its usefulness. You see, the test was devised in 1950 by Alan Turing. That's some 75 years ago. Nothing from that long ago, the argument goes, can be applicable in our modern era of AI.

Others will haughtily tell you that the Turing Test has already been successfully passed by existing AI. Lots of banner headlines say so. Thus, the Turing Test isn't of much utility, since we know that we don't yet have pinnacle AI, yet the Turing Test seems to say that we do. I've repeatedly tried to set the record straight on this matter. The real story is that the Turing Test has been improperly applied. Those who claim the Turing Test has been passed are playing fast and loose with the famous testing method.

Flouting The Turing Test

Part of the loophole in the Turing Test is that the number of questions and the type of questions are unspecified. It is up to the person or team opting to lean on the Turing Test to decide those crucial facets. This causes unfortunate trouble and problematic results.

Suppose that I decide to perform a Turing Test on ChatGPT, the immensely popular generative AI and large language model (LLM) that 400 million people use weekly. I will come up with questions to ask ChatGPT, and I will ask the same questions of my closest friend to see what answers they give. If I am unable to differentiate the answers of my human friend from those of ChatGPT, I shall summarily and loudly declare that ChatGPT has passed the Turing Test. The idea is that the generative AI has successfully mimicked human intellect to the degree that the human-provided answers and the AI-provided answers were essentially the same.

After coming up with fifty questions, some easy and some hard, I proceeded with my administration of the Turing Test. ChatGPT answered each question, and so did my friend.
The answers by the AI and the answers by my friend were pretty much indistinguishable. Voila, I can start telling the world that ChatGPT has passed the Turing Test. It only took me about an hour in total: half the time coming up with the questions, and half getting the respective answers. Easy-peasy.

The Number Of Questions

Here's a thought for you to ponder. Do you believe that asking fifty questions is sufficient to determine whether intellectual acumen exists? That somehow doesn't seem sufficient, especially if we define AGI as a form of AI that is intellectually on par with the entire range and depth of human intellect. It turns out that the questions I came up with for my run of the Turing Test didn't include anything about chemistry, biology, or many other disciplines or domains. Why didn't I include those realms? Well, I had chosen to compose just fifty questions. You cannot achieve any semblance of depth and breadth across all human knowledge in a mere fifty questions.

Sure, you could cheat and ask a question that implores the person or the AI to rattle off everything they know. In that case, presumably, at some point, the 'answer' would include chemistry, biology, and so on. That's not a viable approach, as I discuss at the link here, so let's put aside broad-stroke, catch-all questions and aim for specific ones.

How Many Questions Is Enough

I trust that you are willing to concede that the number of questions is important when performing a test that tries to ascertain intellectual capabilities. Let's try to come up with a number that makes some sense.

We can start with the number zero. Some believe that we shouldn't have to ask even one question; the AI has the onus to convince us that it has attained AGI or ASI. Therefore, we can merely sit back and see what the AI says to us. We either are ultimately convinced by the smooth talking, or we aren't.
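The loose testing procedure described above can be sketched in a few lines of code. This is purely illustrative: the question list, the answer sources, and the judge are hypothetical stand-ins supplied by whoever runs the test, not any real benchmark or API.

```python
# Illustrative sketch of the naive Turing-Test run described above.
# ask_ai, ask_human, and judge are hypothetical stand-ins chosen by
# the tester; nothing here reflects a real benchmark or API.

def naive_turing_test(questions, ask_ai, ask_human, judge):
    """Pose each question to both respondents and let a judge try to
    spot the AI. If the judge does no better than chance, the AI is
    (naively) declared to have 'passed'."""
    spotted = 0
    for question in questions:
        ai_answer = ask_ai(question)
        human_answer = ask_human(question)
        # The judge guesses which answer came from the AI.
        if judge(question, ai_answer, human_answer) == "ai":
            spotted += 1
    detection_rate = spotted / len(questions)
    return detection_rate <= 0.5  # indistinguishable, so "passed"
```

Note that nothing in this procedure constrains the number or breadth of the questions, which is exactly the loophole at issue.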
A big problem with the zero approach is that the AI could prattle endlessly and might simply be doing a dump of everything it has patterned on. The beauty of asking questions is that you get an opportunity to jump around and potentially find blank spots. If the AI only spouts whatever it chooses to say, the wool could readily be pulled over your eyes.

I suggest that we agree to use a non-zero count. We ought to ask at least one question. The difficulty with being constrained to one question is that we are back to the conundrum of either missing the boat by hitting only one particular nugget, or asking for the entire kitchen sink in an overly broad manner. Neither is satisfying.

Okay, we must ask at least two or more questions. I dare say that two doesn't seem high enough. Does ten seem like enough questions? Probably not. What about one hundred? Still doesn't seem sufficient. A thousand? Ten thousand? One hundred thousand? It's hard to judge where the right number might be. Maybe we can noodle on the topic and figure out a ballpark estimate that makes reasonable sense. Let's do that.

Recent Tests Of Top AI

You might know that every time one of the top AI makers comes out with a new version of their generative AI, they run a bunch of AI assessment tests to gleefully showcase how much better their AI is than competing LLMs. For example, Grok 4 by Elon Musk's xAI was recently released, and xAI and others used many of the specialized tests that have become relatively popular to see how well Grok 4 compares. The tests included (a) Humanity's Last Exam (HLE), (b) ARC-AGI-2, (c) GPQA, (d) USAMO 2025, (e) AIME 2025, (f) LiveCodeBench, (g) SWE-Bench, and others. Some of those tests assess the AI's ability to generate program code (e.g., LiveCodeBench, SWE-Bench). Some are about solving math problems (e.g., USAMO, AIME).
The GPQA test is science-oriented. Do you know how many questions are in the GPQA testing set? The full extended set totals 546 questions; 448 of them make up the curated Main Set, and the 198 hardest form the Diamond Set. If you are interested in the nature of the questions in GPQA, visit the GPQA GitHub site; you might also find of interest the initial paper, 'GPQA: A Graduate-Level Google-Proof Q&A Benchmark' by David Rein et al., arXiv, November 20, 2023. Per that paper: 'We present GPQA, a challenging dataset of 448 multiple choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are 'Google-proof').'

Please be aware that you are likely to hear some eyebrow-raising claims that a generative AI is better than PhD-level graduate students across all domains because of particular scores on the GPQA test. That is a breathtakingly sweeping statement and misleadingly portrays the actual testing that normally takes place. In short, any such proclamation should be taken with a humongous grain of salt.

Ballparking The Question Count

Suppose we come up with our own handy-dandy test that has PhD-level questions. The test will have 600 questions in total, crafted evenly across 6 domains: (1) physics, (2) chemistry, (3) biology, (4) geology, (5) astronomy, and (6) oceanography. That means we are going to have 100 questions in each discipline. For example, there will be 100 questions about physics.
Are you comfortable that by asking a human being 100 questions about physics, we will be able to ascertain the entire range and depth of their knowledge and intellectual prowess in physics? I doubt it. You will certainly be able to gauge a semblance of their physics understanding, but the odds are that with just 100 questions you are only sampling their knowledge. Is that a large enough sample, or should we be asking even more questions?

Another consideration is that we are only asking questions in 6 domains. What about all the other domains? We haven't included any questions on meteorology, anthropology, economics, political science, archaeology, history, law, linguistics, etc. If we want to assess an AI such as the hoped-for AGI, we presumably need to cover every possible domain. We also need a sufficiently high count of questions per domain so that we are comfortable that our sampling goes deep and wide.

Devising A Straw Man Count

Go with me on a journey to come up with a straw man count. Our goal will be an order-of-magnitude estimate rather than an exact number, so that we know the range of the ballpark.

We will begin the adventure by noting that the U.S. Library of Congress maintains an extensive set of subject headings, commonly known as the LCSH (Library of Congress Subject Headings). The LCSH was started in 1897 and has been updated and maintained ever since. It is generally considered the most widely used subject vocabulary in the world. As an aside, some people favor the LCSH and some do not. There are heated debates about whether certain subject headings are warranted, and acrimonious debates concerning the wording of some of them. On and on the discourse goes. I'm not going to wade into that quagmire here. As of April 2025, the LCSH contained 388,594 records.
I am going to round that number to 400,000 for the sake of this ballpark discussion. We can quibble about that, along with whether all those subject headings are distinctive and usable, but I'm not taking that route for now. Suppose we came up with one question for each LCSH subject heading, so that whatever the domain or discipline consists of, we ask one question about it. We would then have 400,000 questions ready to be asked. One question per realm doesn't seem sufficient, though. If we instead go with 10,000 questions per LCSH heading, we will need to come up with 4 billion questions. That's a lot of questions. But maybe 10,000 questions isn't sufficient for each realm either. We might go with 100,000 questions per heading, which brings the grand total to 40 billion questions.

Gauging AGI Via Questions

Does asking a potential AGI a billion or many billions of questions, i.e., 4 billion to 40 billion, equally varied across all 'known' domains, seem a sufficient range and depth of testing? Some critics will say that it is hogwash: you don't need to ask that many questions; it is vast overkill; you can use a much smaller number. If so, what's that number? And what is the justification for that proposed count? Would the number be on the order of many thousands or millions, if not billions? And don't try to duck the matter by saying that the count is somehow amorphous or altogether indeterminate.

In the straw man case of billions, skeptics will say that you cannot possibly come up with a billion or more questions; it is logistically infeasible. Even if you could, you would never be able to assess the answers given to those questions. It would take forever to go through billions of answers, and you would need experts across all areas of human knowledge to judge whether the answers were right or wrong.
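As a quick sanity check, the straw-man totals follow directly from multiplying the rounded LCSH count by each questions-per-heading choice:

```python
# Back-of-the-envelope totals for the straw-man question counts above.
LCSH_HEADINGS = 400_000  # ~388,594 records as of April 2025, rounded up

for per_heading in (1, 10_000, 100_000):
    total = LCSH_HEADINGS * per_heading
    print(f"{per_heading:>7,} question(s) per heading -> {total:>14,} total")
# 1 per heading -> 400,000; 10,000 -> 4,000,000,000; 100,000 -> 40,000,000,000
```

That is, the 4 billion and 40 billion figures are simply 400,000 headings times 10,000 and 100,000 questions each.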
A counterargument is that we could potentially use AI, an AI other than the AGI being tested, to aid in the endeavor. That too has upsides and downsides. I'll be covering that consideration in an upcoming post. Be on the watch.

There are certainly a lot of issues to be considered and dealt with. The extraordinarily serious matter at hand is worthy of addressing these facets. Remember, we are focusing on how we will know that we've reached AGI. That's a monumental question. We should be prepared to ask enough questions that we can collectively and reasonably conclude that AGI has been attained.

As Albert Einstein aptly put it: 'Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.'

Hong Kong's future lies in being the finance launch pad for tomorrow's tech

South China Morning Post

a day ago

  • Business
  • South China Morning Post


For years, Hong Kong's policy narrative has leaned heavily into national security. From the national security law to Article 23 legislation, the emphasis on stability and control has reshaped the city's global identity. But with geopolitical tensions showing signs of stabilising, the moment is ripe for a strategic shift.

The next global chapter isn't just about containment and scarcity; it is also about creation and abundance. The rise of artificial intelligence (AI) and digital platforms means abundance can be more evenly distributed, turning technological promise into tangible benefits for both urban centres and rural communities. Hong Kong's future lies in becoming a launch pad for the financial engines of the next generation's technology economy, from deep-tech funding to AI-powered green finance.

Just as steam power revolutionised Britain's industrial landscape, today's equivalents – batteries, nuclear power generation and solar infrastructure – are poised to redefine global growth. These are not niche technologies; they are the backbone of a new era. Battery development is triggering a transformation across supply chains, from rare earth extraction and refinement to mobility and storage solutions. Hong Kong's financial sector should be underwriting this revolution, crafting instruments that support cross-border logistics and deep-tech ventures.

Nuclear power, though politically sensitive, is likely to remain essential to clean energy. Small modular reactors are gaining traction globally, and Hong Kong could position itself as a financing and regulatory sandbox – a neutral and welcoming playground for capital and collaboration.

Solar power, meanwhile, presents a different opportunity. It is the fastest-growing energy source globally, abundant and safe compared to nuclear, yet free from strong strategic entanglement. As major nations consider industrial policy, they often weigh whether a sector is strategic, profitable and winnable. Solar technology and production is arguably geopolitically frictionless and commercially scalable. Financing this sector is something Hong Kong can do well.

Carlsmed seeks up to $103M IPO haul to fuel spine surgery growth

Yahoo

2 days ago

  • Business
  • Yahoo


This story was originally published on MedTech Dive. To receive daily news and insights, subscribe to our free daily MedTech Dive newsletter.

Dive Brief:

Carlsmed has set the target range for its initial public offering, outlining plans to raise up to $103.3 million to support development and commercialization of its spine surgery platform. Carlsmed plans to offer 6.7 million shares of common stock at an expected range of $14 to $16 per share, as well as an underwriters' option to purchase more than 1 million additional shares. The company, which set the price range Tuesday, sells artificial intelligence-enabled software, custom implants and single-use instruments for spine surgeries. Cross-trial comparisons suggest the custom devices may improve alignment and reduce revisions compared to stock implants.

Carlsmed's products compete with stock spine implants sold by companies including Medtronic, Johnson & Johnson and Globus Medical. The company is much smaller than its rivals but growing quickly, with sales increasing by almost 100% in 2024 and on track to rise again this year.

Dive Insight:

Carlsmed is developing its platform to address the limitations of traditional spine fusion procedures. The company has identified the lack of sufficient pre-operative planning, the fit of stock interbody implants and complicated surgical workflows as problems. Targeting a $13.4 billion addressable market, Carlsmed is working to address these issues to improve patient outcomes and cut healthcare costs. The resulting platform uses diagnostic imaging and AI-enabled algorithms to develop personalized digital surgical plans and design custom interbody implants for each patient. Carlsmed collects real-world, post-operative data to improve the planning process. Researchers have generated evidence of the effectiveness of the platform, including through a registry that is tracking real-world clinical outcomes.
An interim analysis of 67 adult spinal deformity patients in the registry found the rate of revision surgery attributable to mechanical complications was 1.5% after a mean follow-up of 14.7 months, according to a federal securities filing. The one-year revision rate in another study of stock implants was 8.7%.

Carlsmed reported revenue of $27.2 million last year, up almost 100% from 2023, and is on track to grow again in 2025. The company generated sales of around $22.2 million in the first half of 2025. Carlsmed plans to keep growing by adding users and increasing use among existing customers. The number of surgeons who have used the platform increased from 103 as of March 2024 to 199 as of June 2025.

Carlsmed's IPO paperwork reveals some of the challenges the company faces as it scales operations. The company's net loss increased last year, rising to $24.2 million, and its reliance on a limited number of contract manufacturing organizations (CMOs) has hurt margins. Delays in the approval of surgical plans or late changes in surgery dates can cause CMOs to charge expedite fees, which drove margins down in the second quarter. Carlsmed expects expedite fees to decrease over time as it improves operational processes and the procedural workflows used by surgeons.

Carlsmed's filing to list on Nasdaq adds to the uptick in medtech IPO activity seen this year. IPO filings are still well down from the pandemic-era peak, but after years of limited activity, some analysts believe a backlog of companies is waiting for favorable conditions to list. The success or failure of companies such as Carlsmed could inform whether the IPO window opens or closes.

How China's open-source AI is helping DeepSeek, Alibaba take on Silicon Valley

South China Morning Post

2 days ago

  • Business
  • South China Morning Post


July 9, 2024, may be remembered as a day of humiliation for China's artificial intelligence community. On that day, US start-up OpenAI, the global leader in AI model development, blocked developers in China – including Hong Kong and Macau – from using its GPT models. In contrast, developers from countries ranging from Afghanistan to Zimbabwe were given access, reflecting OpenAI's unspoken belief that its valuable models must be safeguarded against misuse by China, along with Iran, Russia and North Korea.

Now the tide has turned. With the December 2024 launch of DeepSeek's free-for-all V3 large language model (LLM) and the January release of DeepSeek's R1, an AI reasoning model that rivals the capabilities of OpenAI's o1, the open-source movement started by Chinese firms has sent shock waves through Silicon Valley and Wall Street. The trend has not only unleashed a wave of AI applications in China but also redefined the global AI landscape, winning the support of developers worldwide. Chinese open-source models present a viable alternative to the closed-off systems championed by US tech giants like OpenAI and Google.

Open-source AI models – whose source code and model weights are available for anyone to use, modify and distribute – encourage a collaborative approach to AI development. While open-source computer systems like Linux failed in the past to displace proprietary competitors like Microsoft's Windows on the desktop, analysts said that this time around, China's free-to-use AI models pose a serious challenge to their US counterparts.
